
Cybercriminals utilizing AI to commit cybercrimes

The rise of publicly available AI tools has also led to a rise in AI-related cybercrimes and an increase in the effectiveness of scams
 
Jordan Smith
US Reporter, HCLTech

5 minutes read

A hot topic in the IT space over the past few months has undoubtedly been the rise of generative AI and publicly available AI tools like ChatGPT. The benefits of AI are being felt across industries, by all types of companies and customers, but not all AI is used responsibly.

When a new technology arises, it’s always necessary to look at the other side of the coin, as the positives rarely come without negatives.

Recently, cybercriminals have been using AI as a tool to make scams easier to believe. The unethical use of tools like ChatGPT is keeping cybersecurity professionals on their toes.

So far, AI voice scams have built on classic schemes like phishing, helped by how easy the tools are to use and how little AI expertise they require. Beyond that, cybercriminals can use AI to write malware without coding experience, crack passwords, analyze stolen data and find software vulnerabilities.

AI and social engineering

Robocalling scams and phishing emails are nothing new to cybercriminals. But according to researchers at McAfee, thanks to generative AI tools, cybercriminals need as little as three seconds of someone’s voice to successfully clone it and use it in a scam call.

In the past, these robocalls would often imitate health care providers, the IRS or someone claiming to extend your auto warranty. Now, AI-enabled voice-cloning services can make these calls appear to come from a loved one. Further, these tools allow threat actors to respond in real time by typing responses into their voice-cloning applications.

The McAfee research also found that 77% of victims of AI-enabled scam calls report losing money, with over a third of those victims losing more than $1,000.

Protecting against AI-enabled cyber crimes

In a New York Times op-ed, US Federal Trade Commission chair Lina Khan said that while the full extent of generative AI’s potential is still up for debate, there’s little doubt that it will be highly disruptive.

She added that for the US to maintain pace with these technologies, the right policy choices must be made.

“When enforcing the law’s prohibition on deceptive practices, we will look not just at the fly-by-night scammers deploying these tools, but also at the upstream firms that are enabling them,” Khan wrote in the piece.

Along with its research, McAfee also provided some steps to protect against AI voice clone attacks that you can begin using today, including:

  1. Establish a verbal codeword with kids, family members or trusted close friends, much like the passphrase a financial institution or alarm company uses, and ask for it in any message requesting help (see the illustrative sketch after this list).
  2. Always question the source and determine whether the caller really sounds like the person they claim to be. Hang up and call the person directly if you have doubts.
  3. Think before you share on social media. The wider your base of connections, the more risk you could be exposed to. Make sure your social media content isn’t available to the general public and is visible only to friends and family.
  4. Protect your identity through identity monitoring services.
  5. Clear your name from data broker sites that buy, collect and sell detailed personal information gathered from a range of public and private sources.
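
To make the first tip concrete, here is a minimal, purely illustrative Python sketch of the shared-codeword idea: a secret agreed on offline that a cloned voice has no way of knowing. The codeword, function name and example calls are assumptions made up for this sketch; they are not part of McAfee’s guidance or any HCLTech product.

    import hmac

    # Hypothetical family codeword, agreed on in person and never posted online.
    FAMILY_CODEWORD = "blue-heron-42"

    def caller_is_verified(spoken_codeword: str) -> bool:
        # Constant-time comparison so the check does not leak how many
        # characters matched; a cloned voice can mimic a loved one but
        # cannot supply a secret it has never heard.
        return hmac.compare_digest(spoken_codeword.strip().lower().encode(),
                                   FAMILY_CODEWORD.lower().encode())

    if __name__ == "__main__":
        print(caller_is_verified("blue-heron-42"))   # True: proceed, but stay cautious
        print(caller_is_verified("Grandma, help!"))  # False: hang up and call back directly

The point of the sketch is the principle, not the code: a pre-agreed secret, checked exactly, defeats a voice that only sounds right.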

Understanding AI as an organization

We generate incredible amounts of data every day, and the organizations with well-laid-out strategies to capture, process and monetize that data are best positioned to lead in creating customer satisfaction and improving efficiency.

HCLTech’s NEXT.ai is a service that provides cutting-edge solutions for the engineering R&D industry. While it’s difficult to monitor and protect against AI-driven cybercrimes, NEXT.ai helps organizations understand the power of AI and how their customers could be impacted.

Organizations that adopt AI across every aspect of their journey are expected to achieve market leadership over those that do not. HCLTech is a partner that can help strategize, implement and achieve industry-leading AI solutions.

Additionally, HCLTech provides IoT Security to help counter ransomware attacks and maintain dynamic cybersecurity. As interconnected devices and digital adoption have grown, so has exposure to risk and security challenges. Further, HCLTech’s 360° SecureOT Framework enables organizations to assess, strategize, define, design and manage their OT landscape based on various industry-accepted cybersecurity guidelines and standards.
